
    Advanced Simulation of Quantum Computations

    Quantum computation is a promising emerging technology which, compared to conventional computation, allows for substantial speed-ups, e.g. for integer factorization or database search. However, since physical realizations of quantum computers are in their infancy, a significant amount of research in this domain still relies on simulations of quantum computations on conventional machines. This entails significant complexity, which current state-of-the-art simulators try to tackle with a rather straightforward array-based representation and by applying massive hardware power. There also exist solutions based on decision diagrams (i.e. graph-based approaches) that try to tackle the exponential complexity by exploiting redundancies in quantum states and operations. However, these existing approaches do not fully exploit the redundancies that are actually present. In this work, we revisit the basics of quantum computation, investigate how the corresponding quantum states and quantum operations can be represented even more compactly and, eventually, simulated in a more efficient fashion. This leads to a new graph-based simulation approach which outperforms state-of-the-art simulators (array-based as well as graph-based). Experimental evaluations show that the proposed solution is capable of simulating quantum computations for more qubits than before, and in significantly less run-time (several orders of magnitude faster than previously proposed simulators). An implementation of the proposed simulator is publicly available online at http://iic.jku.at/eda/research/quantum_simulation
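    For contrast, the "straightforward array-based representation" mentioned above can be sketched in a few lines: the full state of n qubits is stored as 2^n amplitudes, and every gate application walks all of them - exactly the exponential cost that decision-diagram approaches try to avoid. All identifiers below are illustrative, not taken from the paper's implementation.

```python
import math

def apply_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to qubit `target` of a state vector stored as a
    plain list of 2**n_qubits amplitudes (big-endian qubit ordering).
    Every call touches the whole vector - the exponential cost in question."""
    new = state[:]
    stride = 1 << (n_qubits - 1 - target)
    for i in range(len(state)):
        if i & stride == 0:
            a0, a1 = state[i], state[i | stride]
            new[i] = gate[0][0] * a0 + gate[0][1] * a1
            new[i | stride] = gate[1][0] * a0 + gate[1][1] * a1
    return new

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                 # Hadamard gate
state = [1.0, 0.0, 0.0, 0.0]          # |00>
state = apply_gate(state, H, 0, 2)    # -> (|00> + |10>) / sqrt(2)
```

    A decision-diagram simulator would instead store this state as a small shared graph, since most amplitudes here are identical or zero.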

    Synthesis of Reversible Functions Beyond Gate Count and Quantum Cost

    Many synthesis approaches for reversible and quantum logic have been proposed so far. However, most of them generate circuits with respect to simple metrics, i.e. gate count or quantum cost. On the other hand, to physically realize reversible and quantum hardware, additional constraints exist. In this paper, we describe cost metrics beyond gate count and quantum cost that should be considered while synthesizing reversible and quantum logic for the respective target technologies. We show that the evaluation of a synthesis approach may differ if additional costs are applied. In addition, a new cost metric, namely Nearest Neighbor Cost (NNC), which is imposed by realistic physical quantum architectures, is considered in detail. We discuss how existing synthesis flows can be extended to generate optimal circuits with respect to NNC while still keeping the quantum cost small. Comment: 16 pages, 6 figures, 4 tables, International Workshop on Logic and Synthesis
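    A rough illustration of the NNC idea (assuming, as is common in the literature, a linear architecture in which a two-qubit gate acting on qubits c and t requires |c - t| - 1 adjacent SWAPs to bring the qubits next to each other; the function name is hypothetical):

```python
def nearest_neighbor_cost(two_qubit_gates):
    """Illustrative NNC on a line of qubits: a gate on (c, t) needs
    |c - t| - 1 adjacent SWAPs before the two qubits are neighbors."""
    return sum(abs(c - t) - 1 for c, t in two_qubit_gates)

# CNOT(0, 1) is already nearest-neighbor (cost 0); CNOT(0, 3) costs 2.
cost = nearest_neighbor_cost([(0, 1), (0, 3)])
```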

    NISQ circuit compilation is the travelling salesman problem on a torus

    Noisy, intermediate-scale quantum (NISQ) computers are expected to execute quantum circuits of up to a few hundred qubits. The circuits have to conform to NISQ architectural constraints regarding qubit allocation and the execution of multi-qubit gates. Quantum circuit compilation (QCC) takes a nonconforming circuit and outputs a compatible circuit. Can classical optimisation methods be used for QCC? Compilation is a known combinatorial problem shown to be solvable by two types of operations: 1) qubit allocation, and 2) gate scheduling. We show informally that the two operations form a discrete ring. The search landscape of QCC is a two-dimensional discrete torus where vertices represent configurations of how circuit qubits are allocated to NISQ registers. Torus edges are weighted by the cost of scheduling circuit gates. The novelty of our approach lies in the fact that a circuit's gate list is circular: compilation can start from any gate as long as all the gates are processed and the compiled circuit has the correct gate order. By showing that QCC can be solved as a travelling salesman problem, we bridge a theoretical and practical gap between classical circuit design automation and the emerging field of quantum circuit optimisation. Comment: rewritten. added torus. showing similarity with ts
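    The circular-gate-list observation can be made concrete with a toy sketch: every rotation of the gate list is a valid compilation order, so a brute-force stand-in for a TSP heuristic simply evaluates each possible starting gate under some scheduling cost. The cost matrix below is purely illustrative, not the paper's actual weighting.

```python
def schedule_cost(order, dist):
    """Toy cost of a compilation order: sum of 'distances' between
    consecutively scheduled gates (stand-in for gate-scheduling cost)."""
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def best_start(gates, dist):
    """Exploit the circular gate list: any rotation is a valid order,
    so evaluate all of them and keep the cheapest."""
    rotations = [gates[i:] + gates[:i] for i in range(len(gates))]
    return min(rotations, key=lambda r: schedule_cost(r, dist))

dist = [[0, 1, 5],
        [1, 0, 1],
        [5, 1, 0]]                    # illustrative gate-to-gate costs
order = best_start([0, 1, 2], dist)   # cheapest rotation of the gate list
```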

    An Efficient Methodology for Mapping Quantum Circuits to the IBM QX Architectures

    In the past years, quantum computers have evolved more and more from an academic idea to an upcoming reality. IBM's project IBM Q can be seen as evidence of this progress. Launched in March 2017 with the goal of providing access to quantum computers for a broad audience, it has allowed users to conduct quantum experiments on a 5-qubit and, since June 2017, also on a 16-qubit quantum computer (called IBM QX2 and IBM QX3, respectively). Revised versions of these 5-qubit and 16-qubit quantum computers (named IBM QX4 and IBM QX5, respectively) have been available since September 2017. In order to use these, the desired quantum functionality (e.g. provided in terms of a quantum circuit) has to be properly mapped so that the underlying physical constraints are satisfied - a complex task. This demands solutions to automatically and efficiently conduct this mapping process. In this paper, we propose a methodology which addresses this problem, i.e. maps the given quantum functionality to a realization which satisfies all constraints given by the architecture and, at the same time, keeps the overhead in terms of additionally required quantum gates minimal. The proposed methodology is generic, can easily be configured for similar future architectures, and is fully integrated into IBM's SDK. Experimental evaluations show that the proposed approach clearly outperforms IBM's own mapping solution. In fact, for many quantum circuits, the proposed approach determines a mapping to the IBM architecture within minutes, while IBM's solution suffers from long runtimes and runs into a timeout of 1 hour in several cases. As an additional benefit, the proposed approach yields mapped circuits with smaller costs (i.e. fewer additional gates are required). All implementations of the proposed methodology are publicly available at http://iic.jku.at/eda/research/ibm_qx_mapping
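    A drastically simplified version of such a mapping step - greedy SWAP insertion on a linear coupling map, not the paper's actual algorithm - might look as follows; all names are hypothetical:

```python
def map_circuit(cnots, coupling, n_qubits):
    """Greedily insert SWAPs until every CNOT acts on a coupled pair.
    Assumes a linear coupling map; `layout[logical] = physical`."""
    layout = list(range(n_qubits))    # trivial initial placement
    out = []
    for c, t in cnots:
        while (layout[c], layout[t]) not in coupling and \
              (layout[t], layout[c]) not in coupling:
            # move the control's physical qubit one step towards the target
            step = 1 if layout[t] > layout[c] else -1
            a, b = layout[c], layout[c] + step
            out.append(("SWAP", a, b))
            inv = {p: l for l, p in enumerate(layout)}
            layout[inv[a]], layout[inv[b]] = b, a
        out.append(("CNOT", layout[c], layout[t]))
    return out

coupling = {(0, 1), (1, 2), (2, 3)}   # a 4-qubit line
mapped = map_circuit([(0, 3)], coupling, 4)
# CNOT(0, 3) becomes SWAP(0, 1), SWAP(1, 2), CNOT(2, 3)
```

    Real mappers minimise the number of such inserted SWAPs globally; the greedy choice here is only the simplest possible policy.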

    Design Automation for Adiabatic Circuits

    Adiabatic circuits are heavily investigated since they allow for computations with an asymptotically close-to-zero energy dissipation per operation - serving as an alternative technology for many scenarios where energy efficiency is preferred over fast execution. Their concepts are motivated by the fact that the information lost in conventional circuits results in an entropy increase, which causes energy dissipation. To overcome this issue, computations are performed in a (conditionally) reversible fashion which, additionally, has to satisfy switching rules that differ from conventional circuitry - crying out for dedicated design automation solutions. While previous approaches either focus on the electrical realization (resulting in small, hand-crafted circuits only) or on designing fully reversible building blocks (an unnecessary overhead), this work aims at providing an automatic and dedicated design scheme that explicitly takes the recent findings in this domain into account. To this end, we review the theoretical and technical background of adiabatic circuits and present automated methods that dedicatedly realize the desired function as an adiabatic circuit. The resulting methods are further optimized - leading to an automatic and efficient design automation flow for this promising technology. Evaluations confirm the benefits and applicability of the proposed solution.

    Online Scheduled Execution of Quantum Circuits Protected by Surface Codes

    Quantum circuits are the preferred formalism for expressing quantum information processing tasks. Quantum circuit design automation methods mostly use a waterfall approach and consider high-level circuit descriptions to be hardware-agnostic. This assumption has led to a static circuit perspective: the number of quantum bits and quantum gates is determined before circuit execution, and everything is considered reliable with zero probability of failure. Many different schemes for achieving reliable fault-tolerant quantum computation exist, with different schemes suitable for different architectures. A number of large experimental groups are developing architectures well suited to being protected by surface quantum error-correcting codes. Such circuits could include unreliable logical elements, such as state distillation, whose failure can be determined only after their actual execution. Therefore, practical logical circuits, as envisaged by many groups, are likely to have a dynamic structure. This requires an online scheduling of their execution: one knows for sure what needs to be executed only after previous elements have finished executing. This work shows that scheduling shares similarities with place-and-route methods. The work also introduces the first online schedulers of quantum circuits protected by surface codes, and highlights their scheduling efficiency by comparing the new methods with state-of-the-art static scheduling of surface-code-protected fault-tolerant circuits. Comment: accepted in QI

    Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact

    It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error-correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic state distillation sub-circuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler generating compact box placements. The implemented software, whose source code is available online, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are proposed.
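    A greedy box-placement scheduler can be pictured as a shelf-packing heuristic: boxes are placed left to right, and a new "shelf" further along the time axis is opened whenever the current one is full. The sketch below is only a stand-in to illustrate compact placement, not the scheduler from the paper.

```python
def greedy_place(boxes, strip_width):
    """Shelf-style greedy placement of (width, height) boxes in a strip
    of fixed width: fill a shelf left to right, open a new shelf
    (further along the time axis) when the current one is full."""
    x = y = shelf_height = 0
    placements = []
    for w, h in boxes:
        if x + w > strip_width:       # shelf full: start a new one
            y += shelf_height
            x = shelf_height = 0
        placements.append((x, y))
        x += w
        shelf_height = max(shelf_height, h)
    return placements

spots = greedy_place([(2, 1), (2, 2), (1, 1)], strip_width=3)
```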

    Reliable quantum circuits have defects

    State-of-the-art quantum computing architectures are founded on the decision to use scalable but faulty quantum hardware in conjunction with an efficient error-correcting code capable of tolerating high error rates. The promised effect of this decision is that the first large-scale practical quantum computer is within reach. Coming to grips with the strategy and the challenges of preparing reliable executions of an arbitrary quantum computation is not difficult. Moreover, the article explains why defects are good. Comment: preprint of the paper from XRD

    Advanced Simulation of Droplet Microfluidics

    The complexity of droplet microfluidics grows as parallel processes and multiple functionalities are implemented on a single device. This poses a challenge to the engineer designing the microfluidic networks. In today's design processes, the engineer relies on calculations, assumptions, and simplifications, as well as his/her experience and intuition. To validate the obtained specification of the microfluidic network, a prototype is usually fabricated and physical experiments are conducted. In case the design does not implement the desired functionality, this prototyping iteration is repeated - obviously resulting in an expensive and time-consuming design process. In order to avoid unnecessary prototyping loops, simulation methods could help to validate the specification of the microfluidic network before any prototype is fabricated. However, state-of-the-art simulation tools come with severe limitations which prevent their utilization for practically relevant applications. More precisely, they are often not dedicated to droplet microfluidics, cannot handle the required physical phenomena, are not publicly available, and can hardly be extended. In this work, we present an advanced simulation approach for droplet microfluidics which addresses these shortcomings and, eventually, allows for simulating practically relevant applications. To this end, we propose a simulation framework which directly works on the specification of the design, supports essential physical phenomena, is publicly available, and is easy to extend. Evaluations and case studies demonstrate the benefits of the proposed simulator: while current state-of-the-art tools were not applicable to practically relevant microfluidic networks, the proposed solution reduces the design time and costs, e.g. of a drug screening device, from one person month and USD 1200, respectively, to just a fraction of that. Comment: preprint
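    The kind of physical model such simulators typically build on can be hinted at with the standard hydraulic-electric analogy: a rectangular microchannel has hydraulic resistance R = 12*mu*L / (w*h^3*(1 - 0.63*h/w)) (a first-order approximation valid for h < w), and the volumetric flow follows an Ohm's-law analogue Q = dP / R. The snippet below is a generic sketch of this model, not code from the proposed framework.

```python
def channel_resistance(mu, length, width, height):
    """Hydraulic resistance of a rectangular microchannel, first-order
    approximation (valid for height < width):
        R = 12*mu*L / (w * h**3 * (1 - 0.63*h/w))"""
    return 12 * mu * length / (width * height ** 3
                               * (1 - 0.63 * height / width))

def flow_rate(delta_p, resistance):
    """Ohm's-law analogue: volumetric flow Q = pressure drop / resistance."""
    return delta_p / resistance

# Water-like fluid (mu = 1 mPa*s) in a 1 cm long, 100 um x 50 um channel.
R = channel_resistance(1e-3, 0.01, 100e-6, 50e-6)
Q = flow_rate(100.0, R)               # flow at a 100 Pa pressure drop
```

    Solving a whole microfluidic network then amounts to solving the analogous resistor network, with droplets adding position-dependent resistances.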

    Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration

    Sample preparation is an indispensable component of almost all biochemical protocols, and it involves, among others, making dilutions and mixtures of fluids in certain ratios. Recent microfluidic technologies offer suitable platforms for automating dilutions on-chip; typically, on a digital microfluidic biochip (DMFB), a sequence of (1:1) mix-split operations is performed on fluid droplets to achieve the target concentration factor (CF) of a sample. A (1:1) mixing model ideally comprises the mixing of two unit-volume droplets followed by a (balanced) split into two unit-volume daughter droplets. However, a major source of error in fluidic operations is unbalanced splitting, where two unequal-volume droplets are produced following a split. Such volumetric split-errors occurring in different mix-split steps of the reaction path often cause a significant drift in the target-CF of the sample, the precision of which cannot be compromised in life-critical assays. In order to circumvent this problem, several error-recovery or error-tolerant techniques have been proposed recently for DMFBs. Unfortunately, the impact of such fluidic errors on a target-CF and the dynamics of their behavior have not yet been rigorously analyzed. In this work, we investigate the effect of multiple volumetric split-errors on various target-CFs during sample preparation. We also perform a detailed analysis of the worst-case scenario, i.e., the condition under which the error in a target-CF is maximized. This analysis may lead to the development of new techniques for error-tolerant sample preparation with DMFBs without using any sensing operation. Comment: 11 pages, 17 figures
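    The (1:1) mix-split model and the effect of a volumetric split-error can be made concrete with a small sketch that tracks each droplet as a (CF, volume) pair; the error parameter eps below is illustrative, not the paper's notation.

```python
def mix(d1, d2):
    """Ideal (1:1) mixing: the resulting CF is the volume-weighted mean."""
    (cf1, v1), (cf2, v2) = d1, d2
    v = v1 + v2
    return ((cf1 * v1 + cf2 * v2) / v, v)

def split(droplet, eps=0.0):
    """Split with volumetric error eps: the daughters receive volume
    fractions (1 + eps)/2 and (1 - eps)/2; both inherit the CF."""
    cf, v = droplet
    return (cf, v * (1 + eps) / 2), (cf, v * (1 - eps) / 2)

sample, buffer_ = (1.0, 1.0), (0.0, 1.0)       # (CF, unit volume)
big, _ = split(mix(sample, buffer_), eps=0.1)  # erroneous first split
cf, _ = mix(big, buffer_)                      # target was CF = 0.25
```

    With eps = 0.1 at the first split, the achieved CF drifts from the intended 0.25 to 0.55/2.1, roughly 0.262 - the kind of target-CF drift whose dynamics the paper analyzes.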